
    DropIn: Making Reservoir Computing Neural Networks Robust to Missing Inputs by Dropout

    The paper presents a novel, principled approach to training recurrent neural networks from the Reservoir Computing family that are robust to missing input features at prediction time. Building on the ensembling properties of Dropout regularization, we propose a methodology, named DropIn, which efficiently trains a neural model as a committee machine of subnetworks, each capable of predicting with a subset of the original input features. We discuss the application of the DropIn methodology to Reservoir Computing models, targeting applications characterized by input sources that are unreliable or prone to disconnection, such as pervasive wireless sensor networks and ambient intelligence. We provide an experimental assessment using real-world data from such application domains, showing how the DropIn methodology maintains predictive performance comparable to that of a model without missing features, even when 20%–50% of the inputs are unavailable.
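
    As a rough illustration of the DropIn idea, the sketch below trains a toy echo state network while randomly masking input features per training sequence, so the readout learns to predict from feature subsets; the dimensions, toy task, and masking rate are our assumptions, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions (assumptions, not from the paper).
    n_in, n_res, drop_p = 8, 200, 0.3

    # Fixed random reservoir, as usual in Reservoir Computing.
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

    def run_reservoir(U, mask):
        """Drive the reservoir with input features zeroed by `mask`."""
        x = np.zeros(n_res)
        states = []
        for u in U:
            x = np.tanh(W_in @ (u * mask) + W @ x)
            states.append(x.copy())
        return np.array(states)

    # DropIn-style training: each sequence sees a random subset of the
    # inputs, so the readout behaves like a committee of subnetworks,
    # each trained on a different feature subset.
    U_train = rng.normal(size=(50, 100, n_in))   # 50 toy sequences
    y_train = U_train.sum(axis=2)                # toy target
    S, T = [], []
    for U, y in zip(U_train, y_train):
        mask = (rng.random(n_in) > drop_p).astype(float)
        S.append(run_reservoir(U, mask))
        T.append(y)
    S, T = np.vstack(S), np.concatenate(T)
    W_out = np.linalg.lstsq(S, T, rcond=None)[0]  # linear readout

    # At prediction time, missing features are simply masked out.
    mask_missing = np.array([1, 1, 0, 1, 1, 0, 1, 1], dtype=float)
    y_hat = run_reservoir(U_train[0], mask_missing) @ W_out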

    Topographic mapping for quality inspection and intelligent filtering of smart-bracelet data

    Wrist-worn wearable devices equipped with heart-activity sensors can provide valuable data for preventative health. However, heart-activity analysis from these devices suffers from noise introduced by motion artifacts. Methods traditionally used to remove outliers based on motion data can discard clean data if some movement was present, and accept noisy data, e.g., when the subject was still but the sensor was misplaced. This work shows that self-organizing maps (SOMs) can be used to effectively accept or reject sections of heart data collected from unreliable devices, such as wrist-worn devices. In particular, the proposed SOM-based filter accepts a larger number of measurements (fewer false negatives) with a higher overall quality than methods based solely on statistical analysis of motion data. We provide an empirical analysis on real-world wearable data comprising heart and motion data of users. We show how topographic mapping can help identify and interpret patterns in the sensor data and relate them to an assessment of user state. More importantly, our experimental results show that the proposed approach retains almost twice the amount of data while keeping samples with an error an order of magnitude lower than that of a filter based on accelerometric data.
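
    The following sketch illustrates the general mechanism of a SOM-based accept/reject filter: sections whose feature vectors map far from every SOM prototype (high quantization error) are treated as noisy and rejected. The feature vectors, map size, and threshold are illustrative placeholders, not the paper's actual configuration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-ins for windowed heart/motion features (assumption):
    # each row is a feature vector for one section of the recording.
    X = rng.normal(size=(500, 6))

    # Tiny self-organizing map trained with the classic online rule.
    grid = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2)
    W = rng.normal(size=(64, X.shape[1]))

    for t, x in enumerate(X[rng.permutation(len(X))]):
        lr = 0.5 * np.exp(-t / len(X))           # decaying learning rate
        sigma = 3.0 * np.exp(-t / len(X))        # decaying neighbourhood
        bmu = np.argmin(((W - x) ** 2).sum(1))   # best-matching unit
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)           # pull neighbourhood toward x

    # Quantization error: distance of each sample to its BMU. Sections
    # mapping far from every prototype are rejected as noisy.
    qe = np.sqrt(((X[:, None, :] - W[None]) ** 2).sum(-1)).min(1)
    threshold = np.percentile(qe, 90)            # illustrative cut-off
    accepted = X[qe <= threshold]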

    A compositional model to characterize software and hardware from their resource usage

    Unifying hardware and software benchmarking: a resource-agnostic model

    Lilja (2005) states that “In the field of computer science and engineering there is surprisingly little agreement on how to measure something as fundamental as the performance of a computer system.” The field lacks the most fundamental element for sharing measures and results: an appropriate metric to express performance. Since the introduction of laptops and mobile devices, there has been a strong research focus on the energy efficiency of hardware. Many papers, from both academia and industrial research labs, propose methods and ideas to lower power consumption in order to lengthen the battery life of portable devices. Much less effort has been spent on defining the responsibility of software in the overall energy consumption of a computational system. Some attempts have been made to describe the energy behaviour of software, but none of them abstracts from the physical machine where the measurements were taken. In our opinion this is a strong drawback, because the results cannot be generalized. In this work we attempt to bridge the gap between characterization and prediction, of both hardware and software, of performance and energy, in a single unified model. We propose a model designed to be as simple as possible and generic enough to abstract from the specific resource being described or predicted, yet concrete and practical, allowing useful and precise performance and energy predictions. The model applies to the broadest possible set of resources; we focus mainly on time and memory (hence bridging hardware benchmarking and the classical time complexity of algorithms) and on energy consumption. To ensure wide applicability in real-world scenarios, the model is completely black-box: it does not require any information about the source code of the program and relies only on external metrics, such as completion time, energy consumption, or performance counters. Extending the benchmarking model, we define the notion of experimental computational complexity as the characterization of how resource usage changes as the input size grows. Finally, we define a high-level energy model capable of characterizing the power consumption of computers and clusters in terms of the usage of resources as defined by our benchmarking model. We tested our model in four experiments. Expressiveness: we show the close relationship between energy and classical theoretical complexity, and that our experimental computational complexity is expressive enough to capture interesting behaviour of programs simply by analysing their resource usage. Performance prediction: we use the large database of performance measures available on the CPU SPEC website to train our model and predict the performance of the CPU SPEC suite on randomly selected computers. Energy profiling: we test our model's ability to characterize and predict the power usage of a cluster running OpenFOAM while changing the number of active nodes and cores. Scheduling: applying our performance prediction model to features of programs extracted at runtime, we predict the device on which it is most convenient to execute each program in a heterogeneous system.
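
    To make the notion of experimental computational complexity concrete, the sketch below measures a workload at several input sizes and fits a power law on log-log axes; the slope estimates how resource usage grows with input size. This is our illustration of the concept, not the thesis's exact model.

    import time
    import numpy as np

    def measure(f, sizes, reps=3):
        """Median wall-clock time of f(n) for each input size n."""
        out = []
        for n in sizes:
            ts = []
            for _ in range(reps):
                t0 = time.perf_counter()
                f(n)
                ts.append(time.perf_counter() - t0)
            out.append(np.median(ts))
        return np.array(out)

    # Example workload (assumption): sorting a random array of size n.
    rng = np.random.default_rng(0)
    work = lambda n: np.sort(rng.random(n))

    sizes = np.array([2 ** k for k in range(12, 20)])
    times = measure(work, sizes)

    # Fit t ~ c * n^alpha on log-log axes; the slope estimates the
    # experimental complexity exponent (close to 1 here, since the
    # log factor of sorting is small at these sizes).
    alpha, log_c = np.polyfit(np.log(sizes), np.log(times), 1)
    print(f"estimated exponent: {alpha:.2f}")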

    A study of the energy absorption properties of algorithms

    We propose a methodology for measuring the energy consumption of programs using a unit of measure that makes the results independent of the hardware used. We also analyse the results, studying the energy properties of well-known algorithms and of multicore architectures.
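
    The abstract does not specify the measurement setup; one common way to measure program energy on Linux is through the Intel RAPL counters. The sketch below (assuming an Intel CPU and readable powercap files; the path and the per-element normalization are our assumptions) measures the energy of a function and reports it per unit of work, which is one way to make results less hardware-dependent.

    import time
    from pathlib import Path

    # Intel RAPL counter exposed by the Linux powercap interface
    # (assumption: Intel CPU, readable permissions; path varies).
    RAPL = Path("/sys/class/powercap/intel-rapl:0")

    def read_uj():
        return int((RAPL / "energy_uj").read_text())

    def energy_joules(f, *args):
        """Package energy consumed while f runs, in joules, plus time."""
        wrap = int((RAPL / "max_energy_range_uj").read_text())
        e0, t0 = read_uj(), time.perf_counter()
        f(*args)
        e1, t1 = read_uj(), time.perf_counter()
        uj = (e1 - e0) % wrap          # handle counter wraparound
        return uj / 1e6, t1 - t0

    # Less hardware-dependent comparison: report energy per unit of
    # work (here, joules per element sorted) rather than raw joules.
    energy, secs = energy_joules(sorted, list(range(1_000_000)))
    print(f"{energy:.2f} J in {secs:.2f} s "
          f"-> {energy / 1e6 * 1e9:.1f} nJ/element")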

    Roman diplomatic acts, 338–270 BC: chronology and historical context

    The main subject of this thesis is the analysis of the Roman diplomatic acts stipulated during the Roman conquest of Italy. These acts are mainly foedera, paces, societates, and amicitiae, together with their Greek equivalents. Starting from a systematic analysis of the diplomatic acts, I argue that Roman diplomatic action contributed to the conquest of Italy as much as military action did. Moreover, Roman diplomatic action in the Italian political landscape differed from that of other powers; consequently, Roman diplomatic acts were, as far as we can see, highly elaborate. Finally, many sources that at first sight seem incoherent make sense when read in the light of the diplomatic situation. My conclusions concern Roman geopolitical and diplomatic strategy between the fourth and third centuries BC. The Romans used diplomacy as a tool of conquest. They were sophisticated in drafting diplomatic acts, carefully choosing clauses and terms, and they used them to promote the Roman presence among the other Italic peoples, widening their diplomatic horizons. Through diplomacy, the Romans made contact with many political entities among the Italic and Italiote peoples: they made peace and moved from war to war, provoking some wars that were useful to them; they concluded alliances that also enlarged the Roman army; they colonized territories; and they kept a careful eye on the powers that were not yet under their dominion.

    Dual-Branch Collaborative Transformer for Virtual Try-On

    Image-based virtual try-on has recently gained considerable attention in both the scientific and fashion industry communities due to its challenging setting and practical real-world applications. While pure convolutional approaches have been explored to solve the task, Transformer-based architectures have not received significant attention yet. Following the intuition that self- and cross-attention operators can deal with long-range dependencies and hence improve generation, in this paper we extend a Transformer-based virtual try-on model by adding a dual-branch collaborative module that can exploit cross-modal information at generation time. We perform experiments on the VITON dataset, which is the standard benchmark for the task, and on a recently collected virtual try-on dataset with multi-category clothing, Dress Code. Experimental results demonstrate the effectiveness of our solution over previous methods and show that Transformer-based architectures can be a viable alternative for virtual try-on.
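
    The sketch below shows one plausible shape for a dual-branch block combining self- and cross-attention, where two token streams (e.g. person and garment features) first attend to themselves and then to each other; the module names and dimensions are our assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class DualBranchBlock(nn.Module):
        """Illustrative dual-branch block: each branch runs
        self-attention, then attends to the other branch via
        cross-attention (query from one stream, key/value from
        the other). Sizes are assumptions."""

        def __init__(self, dim=256, heads=8):
            super().__init__()
            self.self_a = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.self_b = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.cross_a = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.cross_b = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, a, b):
            a = self.norm(a + self.self_a(a, a, a)[0])
            b = self.norm(b + self.self_b(b, b, b)[0])
            # Exchange information across the two branches.
            a2 = self.norm(a + self.cross_a(a, b, b)[0])
            b2 = self.norm(b + self.cross_b(b, a, a)[0])
            return a2, b2

    person = torch.randn(2, 196, 256)   # e.g. flattened person tokens
    garment = torch.randn(2, 196, 256)  # e.g. flattened garment tokens
    p, g = DualBranchBlock()(person, garment)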

    Multimodal Garment Designer: Human-Centric Latent Diffusion Models for Fashion Image Editing

    Fashion illustration is used by designers to communicate their vision and to bring the design idea from conceptualization to realization, showing how clothes interact with the human body. In this context, computer vision can thus be used to improve the fashion design process. Unlike previous works, which mainly focused on the virtual try-on of garments, we propose the task of multimodal-conditioned fashion image editing, guiding the generation of human-centric fashion images by following multimodal prompts, such as text, human body poses, and garment sketches. We tackle this problem by proposing a new architecture based on latent diffusion models, an approach that has not been used before in the fashion domain. Given the lack of existing datasets suitable for the task, we also extend two existing fashion datasets, namely Dress Code and VITON-HD, with multimodal annotations collected in a semi-automatic manner. Experimental results on these new datasets demonstrate the effectiveness of our proposal, both in terms of realism and coherence with the given multimodal inputs. Source code and collected multimodal annotations will be publicly released at: https://github.com/aimagelab/multimodal-garment-designer
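
    As a rough sketch of multimodal conditioning in a latent diffusion denoiser, the code below concatenates spatial conditions (pose heatmaps, garment sketch) channel-wise with the noisy latent and injects the text embedding through cross-attention; all shapes and the tiny denoiser are illustrative assumptions, not the released implementation.

    import torch
    import torch.nn as nn

    B, C_lat, H, W = 2, 4, 64, 48
    z_t = torch.randn(B, C_lat, H, W)   # noisy latent at step t
    pose = torch.randn(B, 18, H, W)     # e.g. keypoint heatmaps
    sketch = torch.randn(B, 1, H, W)    # garment sketch
    text = torch.randn(B, 77, 768)      # e.g. CLIP text tokens

    # Spatial conditions join the latent channel-wise.
    unet_in = torch.cat([z_t, pose, sketch], dim=1)   # (B, 23, H, W)

    class TinyDenoiser(nn.Module):
        """Toy denoiser consuming the extended input; a real model
        would be a full UNet with text cross-attention at several
        resolutions."""

        def __init__(self, c_in=23, c_out=4, dim=128, txt_dim=768):
            super().__init__()
            self.inp = nn.Conv2d(c_in, dim, 3, padding=1)
            self.attn = nn.MultiheadAttention(dim, 8, kdim=txt_dim,
                                              vdim=txt_dim, batch_first=True)
            self.out = nn.Conv2d(dim, c_out, 3, padding=1)

        def forward(self, x, text):
            h = self.inp(x)
            b, d, hh, ww = h.shape
            seq = h.flatten(2).transpose(1, 2)         # (B, H*W, dim)
            seq = seq + self.attn(seq, text, text)[0]  # text cross-attention
            h = seq.transpose(1, 2).reshape(b, d, hh, ww)
            return self.out(h)

    eps_hat = TinyDenoiser()(unet_in, text)  # predicted noise, (B, 4, H, W)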